33 research outputs found

    The smoothed number of Pareto-optimal solutions in bicriteria integer optimization


    Smoothed Complexity Theory

    Smoothed analysis is a new way of analyzing algorithms introduced by Spielman and Teng (J. ACM, 2004). Classical methods like worst-case or average-case analysis have accompanying complexity classes, like P and AvgP, respectively. While worst-case or average-case analysis gives us a means to talk about the running time of a particular algorithm, complexity classes allow us to talk about the inherent difficulty of problems. Smoothed analysis is a hybrid of worst-case and average-case analysis and compensates for some of their drawbacks. Despite its success in the analysis of individual algorithms and problems, there is no embedding of smoothed analysis into computational complexity theory, which is necessary to classify problems according to their intrinsic difficulty. We propose a framework for smoothed complexity theory, define the relevant classes, and prove first hardness results (for bounded halting and tiling) and tractability results (binary optimization problems, graph coloring, satisfiability). Furthermore, we discuss extensions and shortcomings of our model and relate it to semi-random models. Comment: to be presented at MFCS 201
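
    For context, the smoothed measure that such a complexity theory builds on (following Spielman and Teng; written generically here, since the abstract does not fix a perturbation model) has the following form:

```latex
% Smoothed running time of an algorithm A with perturbation magnitude \sigma:
% an adversary picks a worst-case instance x of size n, which is then
% randomly perturbed (e.g. by Gaussian noise) before A is run.
T_A^{\mathrm{smoothed}}(n, \sigma)
  \;=\; \max_{|x| = n} \;
        \mathbb{E}_{x' \sim \operatorname{pert}_{\sigma}(x)}
        \bigl[\, T_A(x') \bigr]
```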

    Autoantibodies against NMDA receptor 1 modify rather than cause encephalitis

    The etiology and pathogenesis of “anti-N-methyl-D-aspartate-receptor (NMDAR) encephalitis” and the role of autoantibodies (AB) in this condition are still obscure. While NMDAR1-AB exert NMDAR-antagonistic properties by receptor internalization, no firm evidence exists to date that NMDAR1-AB by themselves induce brain inflammation/encephalitis. NMDAR1-AB of all immunoglobulin classes are highly frequent across mammals, with multiple possible inducers and boosters. We hypothesized that “NMDAR encephalitis” results from any primary brain inflammation coinciding with the presence of NMDAR1-AB, which may shape the encephalitis phenotype. Thus, we tested whether, following immunization with a “cocktail” of 4 NMDAR1 peptides, induction of a spatially and temporally defined sterile encephalitis by diphtheria toxin-mediated ablation of pyramidal neurons (“DTA” mice) would modify/aggravate the ensuing phenotype. In addition, we tried to replicate a recent report claiming that immunizing just against the NMDAR1-N368/G369 region induced brain inflammation. Mice after DTA induction revealed a syndrome comprising hyperactivity, hippocampal learning/memory deficits, prefrontal cortical network dysfunction, lasting blood-brain barrier impairment, and brain inflammation, mainly in hippocampal and cortical regions, with pyramidal neuronal death, microgliosis, astrogliosis, modest immune cell infiltration, regional atrophy, and relative increases in parvalbumin-positive interneurons. The presence of NMDAR1-AB enhanced the hyperactivity (psychosis-like) phenotype, whereas all other readouts were identical to those of control-immunized DTA mice. Non-DTA mice with or without NMDAR1-AB were free of any encephalitic signs. Replication of the reported NMDAR1-N368/G369-immunizing protocol in two large independent cohorts of wild-type mice completely failed. To conclude, while NMDAR1-AB can contribute to the behavioral phenotype of an underlying encephalitis, induction of an encephalitis by NMDAR1-AB themselves remains to be proven.

    Worst-case and smoothed analysis of k-means clustering with Bregman divergences

    The $k$-means method is the method of choice for clustering large-scale data sets, and it performs exceedingly well in practice despite its exponential worst-case running-time. To narrow the gap between theory and practice, $k$-means has been studied in the semi-random input model of smoothed analysis, which often leads to more realistic conclusions than mere worst-case analysis. For the case that $n$ data points in $\mathbb{R}^d$ are perturbed by Gaussian noise with standard deviation $\sigma$, it has been shown that the expected running-time is bounded by a polynomial in $n$ and $1/\sigma$. This result assumes that squared Euclidean distances are used as the distance measure. In many applications, however, data is to be clustered with respect to Bregman divergences rather than squared Euclidean distances. A prominent example is the Kullback-Leibler divergence (a.k.a. relative entropy) that is commonly used to cluster web pages. To broaden the knowledge about this important class of distance measures, we analyze the running-time of the $k$-means method for Bregman divergences. We first give a smoothed analysis of $k$-means with (almost) arbitrary Bregman divergences, and we show bounds of $\mathrm{poly}(n^{\sqrt{k}}, 1/\sigma)$ and $k^{kd} \cdot \mathrm{poly}(n, 1/\sigma)$. The latter yields a polynomial bound if $k$ and $d$ are small compared to $n$. On the other hand, we show that the exponential lower bound carries over to a huge class of Bregman divergences.
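
    As a concrete illustration of clustering with a Bregman divergence (a minimal sketch under simplifying assumptions, not the algorithm analyzed in the paper; all function and variable names are illustrative), a Lloyd-style k-means loop that uses the Kullback-Leibler divergence in the assignment step might look as follows:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q) for probability vectors p, q."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def kmeans_bregman(points, k, iters=50, seed=0):
    """Lloyd-style k-means with the KL divergence as the Bregman divergence.

    For every Bregman divergence the cost-minimizing center of a cluster is
    the arithmetic mean of its points, so only the assignment step changes
    compared to the squared-Euclidean case.
    """
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point goes to the center with smallest KL divergence.
        labels = np.array([
            np.argmin([kl_divergence(x, c) for c in centers]) for x in points
        ])
        # Update step: centers become cluster means (valid for all Bregman divergences).
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Example: cluster normalized rows, e.g. word distributions of web pages.
data = np.random.default_rng(1).random((100, 5))
data /= data.sum(axis=1, keepdims=True)
centers, labels = kmeans_bregman(data, k=3)
```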

    Comparing and taming the reactivity of HWE and Wittig reagents with cyclic hemiacetals

    A practical solution to the formation of mixtures of E/Z and open/cyclic isomers in the reaction of (2R,4S)-4-hydroxy-2-methylpentanal (as its hemiacetal, a lactol) with conjugated phosphoranes (stabilised Wittig reagents) and Horner-Wadsworth-Emmons (HWE) reagents is disclosed. The HWE reaction has a strong bias to give oxolanes. On the other hand, stabilised Wittig reagents give unsaturated carboxyl derivatives of configuration E (major) and oxolanes (minor); the latter can be avoided by addition of CF3CH2OH or by using a morpholine amide phosphorane.

    Pure Nash equilibria in player-specific and weighted congestion games

    Unlike standard congestion games, weighted congestion games and congestion games with player-specific delay functions do not necessarily possess pure Nash equilibria. It is known, however, that there exist pure equilibria for both of these variants in the case of singleton congestion games, i.e., if the players' strategy spaces contain only sets of cardinality one. In this paper, we investigate how far such a property on the players' strategy spaces guaranteeing the existence of pure equilibria can be extended. We show that both weighted and player-specific congestion games admit pure equilibria in the case of matroid congestion games, i.e., if the strategy space of each player consists of the bases of a matroid on the set of resources. We also show that the matroid property is the maximal property that guarantees pure equilibria without taking into account how the strategy spaces of different players are interweaved. Additionally, our analysis of player-specific matroid congestion games yields a polynomial-time algorithm for computing pure equilibria. We also address questions related to the convergence time of such games. For player-specific matroid congestion games, in which the best response dynamics may cycle, we show that from every state there exists a short sequence of better responses to an equilibrium. For weighted matroid congestion games, we present a superpolynomial lower bound on the convergence time of the best response dynamics, showing that players do not even converge in pseudopolynomial time.
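
    As a small, generic illustration of such dynamics (a sketch under simplifying assumptions, not the polynomial-time algorithm from the paper; the names and the round cap are inventions of this example), better responses in a singleton congestion game with player-specific delay functions can be simulated as follows. Since, as noted above, such dynamics may cycle, the sketch caps the number of rounds:

```python
# Better-response dynamics in a singleton congestion game with
# player-specific delay functions (illustrative sketch only).

def better_response_dynamics(num_players, resources, delay, start, max_rounds=1000):
    """delay[(i, r)] maps a load to player i's delay on resource r.
    start assigns one resource per player (singleton strategy spaces)."""
    profile = list(start)
    for _ in range(max_rounds):
        improved = False
        for i in range(num_players):
            load = {r: sum(1 for s in profile if s == r) for r in resources}
            current = delay[(i, profile[i])](load[profile[i]])
            for r in resources:
                if r == profile[i]:
                    continue
                # Moving player i onto r raises its load by one.
                if delay[(i, r)](load[r] + 1) < current:
                    profile[i] = r
                    improved = True
                    break
        if not improved:
            return profile  # pure Nash equilibrium: no player can improve
    return None  # the dynamics may cycle for player-specific delays

# Tiny example: 2 players, 2 resources, player-specific linear delays.
delays = {
    (0, "a"): lambda x: 1 * x, (0, "b"): lambda x: 3 * x,
    (1, "a"): lambda x: 2 * x, (1, "b"): lambda x: 1 * x,
}
print(better_response_dynamics(2, ["a", "b"], delays, start=["b", "a"]))
# -> ['a', 'b']
```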

    Uncoordinated Two-Sided Matching Markets.


    Path trading: fast algorithms, smoothed analysis, and hardness results

    The Border Gateway Protocol (BGP) serves as the main routing protocol of the Internet and ensures network reachability among autonomous systems (ASes). When traffic is forwarded between the many ASes on the Internet according to that protocol, each AS selfishly routes the traffic inside its own network according to some internal protocol that supports the local objectives of the AS. We consider possibilities of achieving higher global performance in such systems while maintaining the objectives and costs of the individual ASes. In particular, we consider how path trading, i.e., deviations from routing the traffic using individually optimal protocols, can lead to a better global performance. Shavitt and Singer ("Limitations and Possibilities of Path Trading between Autonomous Systems", INFOCOM 2010) were the first to consider the computational complexity of finding such path trading solutions. They show that the problem is weakly NP-hard and provide a dynamic program to find path trades between pairs of ASes. In this paper we improve upon their results, both theoretically and practically. First, we show that finding path trades between sets of ASes is also strongly NP-hard. Moreover, we provide an algorithm that finds all Pareto-optimal path trades for a pair of ASes. While in principle the number of Pareto-optimal path trades can be exponential, in our experiments this number was typically small. We use the framework of smoothed analysis to give theoretical evidence that this is a general phenomenon and not only limited to the instances on which we performed experiments. The computational results show that our algorithm yields far superior running times and can solve considerably larger instances than the previous dynamic program.
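
    As an illustration of the Pareto-optimality notion used above (a hedged sketch, not the paper's algorithm or the earlier dynamic program; representing a trade by a pair of cost changes is an assumption of the example), the following filters candidate path trades between two ASes down to those that are Pareto-optimal:

```python
# Pareto filter for candidate path trades between two ASes, each trade
# summarized by the cost change it causes for AS 1 and AS 2 (lower is
# better for both). Illustrative sketch only.
from typing import List, Tuple

def pareto_optimal(trades: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Return the trades not dominated by any other trade.

    A trade (a, b) dominates (a', b') if a <= a' and b <= b', with at
    least one inequality strict.
    """
    frontier = []
    best_b = float("inf")
    # Sweep by increasing cost for AS 1; keep a trade only if it is
    # strictly better for AS 2 than every trade seen so far.
    for a, b in sorted(set(trades)):
        if b < best_b:
            frontier.append((a, b))
            best_b = b
    return frontier

# Example: four candidate trades, one of which is dominated.
print(pareto_optimal([(3.0, 1.0), (2.0, 2.0), (2.5, 2.5), (1.0, 4.0)]))
# -> [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
```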